Corelab Seminar
2020-2021
Dimitris Tsipras
Robust Machine Learning: The Worst-Case and Beyond
Abstract.
One of the key challenges in the real-world deployment of
machine learning (ML) models is their brittleness: their performance
degrades significantly when the models encounter even small
deviations from their training environment.
How can we build ML models that are more robust?
In this talk, I will present a methodology for training models that
are invariant to a broad family of worst-case input perturbations. I
will then describe how such robust learning leads to models that learn
fundamentally different data representations, and how this can be
useful even outside the adversarial context. Finally, I will discuss
model robustness beyond the worst case: ways in which our models fail
to generalize and how we can guide further progress on this front.